Convergence as Sample Size Grows
This lecture is about consistency: what the OLS estimator \(\hat{\bbeta}\) converges to as the sample size grows
By the end, you should be able to state conditions under which \(\hat{\bbeta}\) is consistent for \(\bbeta\)
So far we have assumed normality of the error terms
But normality is often implausible
Example: positive outcomes and positive regressors: how can \(U_{it}\) be normal?
So was normality really necessary, or just convenient?
If we knew some other error distribution, that would be nice to work with
But we typically do not
What can we say without any distributional assumption?
Recall the OLS estimator \[ \hat{\bbeta} = \left(\frac{1}{n}\sum_{i} \bX_i\bX_i'\right)^{-1}\frac{1}{n}\sum_{i} \bX_iY_i \]
The key tool is the law of large numbers: sample averages converge in probability to population means
Converges to \(\E[\bX_i\bX_i']^{-1}\E[\bX_iY_i]\)
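A quick simulation (a sketch; the distribution of \(X_i\) and the nonlinear \(Y_i\) are my own illustrative choices) shows this limit is purely a statement about moments: even when \(Y_i\) is an exactly nonlinear function of \(X_i\), OLS converges to the population projection coefficient \(\E[\bX_i\bX_i']^{-1}\E[\bX_iY_i]\), computed here in closed form.

```python
import numpy as np

# Sketch: OLS converges to the population linear-projection coefficient
# E[X^2]^{-1} E[XY], with no "model" assumed. Here Y = X^2 exactly
# (a nonlinear relationship), X ~ Uniform(1, 2), no intercept.
rng = np.random.default_rng(0)

# Population moments for X ~ Uniform(1, 2): E[X^k] = (2^{k+1} - 1)/(k + 1)
EX2 = (2**3 - 1) / 3      # E[X^2] = 7/3
EXY = (2**4 - 1) / 4      # E[XY] = E[X^3] = 15/4
beta_star = EXY / EX2     # population projection coefficient = 45/28

for n in [100, 10_000, 1_000_000]:
    x = rng.uniform(1, 2, size=n)
    y = x**2
    beta_hat = (x @ y) / (x @ x)   # OLS slope without intercept
    print(n, beta_hat)

print("limit:", beta_star)
```

As \(n\) grows, the sample coefficient settles on \(45/28\), the projection coefficient, even though no linear model generated the data.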
No “model”, no “potential outcomes” — just correlations
Can always say the limit equals \[ \bbeta + \E[\bX_i\bX_i']^{-1}\E[\bX_iU_i(\bbeta)] \] where \(U_i(\bbeta) \equiv Y_i - \bX_i'\bbeta\)
Model-free in the sense that by itself writing \(Y_i = \bX_i'\bbeta + U_i(\bbeta)\) does not say anything (a bit like writing \(5 = X + (5-X)\))
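Spelling out the algebra behind this limit (a standard derivation: substitute \(Y_i = \bX_i'\bbeta + U_i(\bbeta)\) into the OLS formula, then apply the law of large numbers to each average):
\[
\hat{\bbeta}
= \left(\frac{1}{n}\sum_{i} \bX_i\bX_i'\right)^{-1}\frac{1}{n}\sum_{i} \bX_iY_i
= \bbeta + \left(\frac{1}{n}\sum_{i} \bX_i\bX_i'\right)^{-1}\frac{1}{n}\sum_{i} \bX_iU_i(\bbeta)
\xrightarrow{p} \bbeta + \E[\bX_i\bX_i']^{-1}\E[\bX_iU_i(\bbeta)]
\]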
If we want \(\bbeta\) to have any causal meaning, we need a causal framework
So let’s make the usual exogeneity assumption: \(\E[U_i|\bX_i] = 0\)
In this class we will maintain SUTVA — no general equilibrium effects, “only your own treatment matters”
Combining these steps: \[ \hat{\bbeta} \xrightarrow{p} \bbeta \]
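A minimal numerical check (illustrative; the exponential error distribution is my own choice) that this consistency result does not rely on normality: the error below is mean-zero but heavily skewed, and OLS still converges to the true coefficients.

```python
import numpy as np

# Sketch: consistency with decidedly non-normal errors.
# Y = 1 + 2*X + U, where U = Exp(1) - 1 is mean-zero, skewed,
# and satisfies E[U | X] = 0. OLS should still converge to (1, 2).
rng = np.random.default_rng(1)

for n in [50, 5_000, 500_000]:
    x = rng.normal(size=n)
    u = rng.exponential(1.0, size=n) - 1.0   # mean-zero, non-normal
    y = 1.0 + 2.0 * x + u
    X = np.column_stack([np.ones(n), x])     # design matrix with intercept
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(n, beta_hat)
```

The estimates approach \((1, 2)\) as \(n\) grows, normality nowhere in sight.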
Let’s think about the proof again
We don’t actually need \(\E[U_i|\bX_i]=0\)
Sufficient to have \[ \E[\bX_iU_i] = 0 \] This is \(k\) conditions now — one per component of \(\bX_i\)
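To see that this is genuinely weaker: the conditional mean restriction implies the moment conditions by the law of iterated expectations, and the moment conditions alone are all the limit formula needs:
\[
\E[U_i|\bX_i] = 0 \;\Rightarrow\; \E[\bX_iU_i] = \E\big[\bX_i\,\E[U_i|\bX_i]\big] = 0,
\qquad
\hat{\bbeta} \xrightarrow{p} \bbeta + \E[\bX_i\bX_i']^{-1}\underbrace{\E[\bX_iU_i]}_{=\,0} = \bbeta
\]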
The consistency argument goes through unchanged
Still can specify the potential outcomes framework
But we lose the conditional mean interpretation: \[ \E[Y_i|\bX_i] = \bX_i'\bbeta + \E[U_{it}|\bX_i] \] Now maybe \(\E[U_{it}|\bX_i]\) is not zero.
In finite samples you may have bias! The consistency result shows that in the limit you are still estimating the correct thing
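One way to see this (a constructed example of mine, not from the lecture): pick an error that is uncorrelated with the regressor, \(\E[X_iU_i]=0\), but not mean-independent, \(\E[U_i|X_i]\neq 0\). The average OLS estimate then deviates from the truth in small samples yet homes in on it as \(n\) grows.

```python
import numpy as np

# Constructed example: X ~ Exp(1) and U = X^2 - 4X + 2, so that
#   E[U]   = 2 - 4 + 2 = 0
#   E[X U] = E[X^3] - 4 E[X^2] + 2 E[X] = 6 - 8 + 2 = 0
# but E[U | X] = X^2 - 4X + 2 is clearly not zero.
# OLS of Y = 2X + U (no intercept) is then consistent but can be
# biased in small samples.
rng = np.random.default_rng(2)

def mean_beta_hat(n, reps=4_000):
    """Average the no-intercept OLS slope over many replications."""
    draws = np.empty(reps)
    for r in range(reps):
        x = rng.exponential(1.0, size=n)
        u = x**2 - 4.0 * x + 2.0
        y = 2.0 * x + u
        draws[r] = (x @ y) / (x @ x)   # OLS slope without intercept
    return draws.mean()

results = {n: mean_beta_hat(n) for n in [10, 100, 10_000]}
for n, m in results.items():
    print(n, m)   # average estimate moves toward the true value 2
```

At small \(n\) the average estimate sits visibly away from 2 (finite-sample bias); by \(n = 10{,}000\) it is essentially on target, exactly the pattern the consistency result predicts.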
A Deeper Look at Linear Regression: Consistency